Get ready to waste your day with this creepily accurate text-generating A.I.

Whether you believe it was one of the most dangerous pieces of artificial intelligence yet created or dismiss it as a massive, unnecessary PR exercise, there’s no doubt that the GPT-2 algorithm created by research lab OpenAI caused a lot of buzz when it was announced earlier this year.

When it revealed the algorithm in February, OpenAI said GPT-2 was too dangerous to release to the general public. Although only a text generator, it supposedly produced text so eerily humanlike that it could convince people they were reading something written by an actual person. To use it, all a user had to do was feed in the start of a document and let the A.I. take over to complete it. Give it the opening of a newspaper story, and it would even manufacture fictitious “quotes.” Predictably, news media went into overdrive describing this as the terrifying new face of fake news. And potentially for good reason.

Jump forward a few months, and users can now have a go at using the A.I. for themselves. The algorithm is available on a website called “Talk to Transformer,” hosted by machine learning engineer Adam King.

“For now OpenAI has decided only to release small and medium-sized versions of it which aren’t as coherent but still produce interesting results,” he writes on his website. “This site runs the new (May 3) medium-sized model, called 345M for the 345 million parameters it uses. If and when [OpenAI] release the full model, I’ll likely get it running here.”

At a high level, GPT-2 doesn’t work all that differently from the predictive keyboard on your phone, which suggests the word you’re likely to type next. However, as King notes, “While GPT-2 was only trained to predict the next word in a text, it surprisingly learned basic competence in some tasks like translating between languages and answering questions. That’s without ever being told that it would be evaluated on those tasks.”
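
For readers curious to tinker beyond the website, that next-word trick can be reproduced in a few lines of Python. The sketch below is an illustrative assumption, not the code behind Talk to Transformer: it uses the open-source Hugging Face Transformers library and its publicly released gpt2-medium checkpoint (roughly the 345M-parameter model King describes) to complete a prompt.

# A minimal sketch of GPT-2-style text completion, assuming the Hugging Face
# Transformers library and its public "gpt2-medium" checkpoint (roughly the
# 345M-parameter model mentioned above). Not the code behind Talk to Transformer.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
model = GPT2LMHeadModel.from_pretrained("gpt2-medium")

# Feed in the start of a document and let the model take over.
prompt = "Give it the opening of a newspaper story, and"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sample one token at a time until the continuation reaches 60 tokens.
output_ids = model.generate(
    input_ids,
    max_length=60,
    do_sample=True,
    top_k=40,
    pad_token_id=tokenizer.eos_token_id,
)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

Because the continuation is sampled, each run produces a different completion, much like repeatedly pressing the button in the website’s single text box.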

The results are, frankly, a little unnerving. Although it’s still prone to the odd bit of A.I.-generated nonsense, it’s nowhere near as silly as the various neural nets used to generate chapters from new A Song of Ice and Fire novels or monologues from Scrubs. Faced with the first paragraph of this story, for instance, it did a pretty serviceable job of turning out something convincing, complete with a bit of subject matter knowledge to help sell the effect.

Thinking that this is the Skynet of fake news is probably going a bit far. But it’s definitely enough to send a small shiver down the spine.

Luke Dormehl
Former Digital Trends Contributor